
    Severe Language Effect in University Rankings: Particularly Germany and France are wronged in citation-based rankings

    We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in the bibliometric, citation-based measurement of research performance when comparing the ranking based on all Web of Science (WoS) covered publications with one based on only English-language WoS covered publications, particularly for Germany and France. Comment: Short communication, 3 pages, 4 figures.
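The mechanism behind such a language effect can be illustrated with a toy calculation (hypothetical numbers; the Leiden Ranking itself uses far richer indicators and data). Because non-English publications typically attract fewer citations, averaging over all publications depresses the score of a university that publishes partly in its national language:

```python
# Sketch of the language effect on a citation average (hypothetical data).

def mean_citations(papers):
    """Mean citations per paper for a list of (language, citations) tuples."""
    return sum(c for _, c in papers) / len(papers)

# Hypothetical university publishing partly in German.
papers = [("en", 12), ("en", 10), ("de", 2), ("de", 1)]

all_score = mean_citations(papers)                              # all WoS papers
english_score = mean_citations([p for p in papers if p[0] == "en"])  # English only

# english_score (11.0) exceeds all_score (6.25): restricting to
# English-language publications raises the university's measured impact.
```

Rankings computed on the two publication sets can therefore order the same universities quite differently.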

    Towards a new crown indicator: an empirical analysis

    We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
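The two mechanisms can be sketched with toy numbers (a minimal illustration of the general idea, not the institute's full implementation): the old crown indicator divides total citations by total expected citations (a ratio of sums), while the new crown indicator averages each publication's own citations-to-expected ratio (a mean of ratios).

```python
# c = actual citation counts; e = the mean citation rate of each
# publication's field and year (its "expected" value). Toy data.

def old_crown(citations, expected):
    """Old crown indicator: ratio of sums (CPP/FCSm-style)."""
    return sum(citations) / sum(expected)

def new_crown(citations, expected):
    """New crown indicator: mean of per-publication ratios (MNCS-style)."""
    ratios = [c / e for c, e in zip(citations, expected)]
    return sum(ratios) / len(ratios)

c = [10, 2, 0]        # actual citations (hypothetical)
e = [5.0, 4.0, 1.0]   # field/year baselines (hypothetical)

# old_crown(c, e) = 12/10 = 1.2, new_crown(c, e) = (2.0 + 0.5 + 0.0)/3 ≈ 0.83:
# the two mechanisms can disagree noticeably for small publication sets.
```

At high aggregation levels the two values tend to converge, which matches the paper's empirical finding that differences are largest for small units such as research groups.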

    Self-citations at the meso and individual levels: effects of different calculation methods

    This paper focuses on the study of self-citations at the meso and micro (individual) levels, on the basis of an analysis of the production (1994–2004) of individual researchers working at the Spanish CSIC in the areas of Biology and Biomedicine and Material Sciences. Two different types of self-citations are described: author self-citations (citations received from the author him/herself) and co-author self-citations (citations received from the researcher's co-authors but without his/her participation). Self-citations do not play a decisive role in the high citation scores of documents at either the individual or the meso level, which are mainly due to external citations. At the micro level, the percentage of self-citations does not vary with professional rank or age, but differences in the relative weight of author and co-author self-citations have been found. The percentage of co-author self-citations tends to decrease with age and professional rank, while the percentage of author self-citations shows the opposite trend. Suppressing author self-citations from citation counts to counter inflated self-citation practices may therefore reduce the citation counts of older scientists, particularly those in the highest categories, more strongly. Author and co-author self-citations provide valuable information on the scientific communication process, but external citations are the most relevant for evaluative purposes. As a final recommendation, studies considering self-citations at the individual level should make clear whether author or total self-citations are used, as these can affect researchers differently.
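The distinction between the two self-citation types reduces to a set comparison between the citing and cited author lists, relative to the focal researcher. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def classify_citation(focal, cited_authors, citing_authors):
    """Classify one citation to a paper (co-)authored by `focal`.

    Returns:
      'author'   - the focal researcher appears among the citing authors
                   (author self-citation);
      'coauthor' - only the focal researcher's co-authors on the cited
                   paper appear among the citing authors (co-author
                   self-citation, without the focal researcher);
      'external' - no author overlap at all.
    """
    citing = set(citing_authors)
    if focal in citing:
        return "author"
    if (set(cited_authors) - {focal}) & citing:
        return "coauthor"
    return "external"

# Example: paper by A and B, cited by a paper authored by B and C ->
# a co-author self-citation from A's perspective.
```

Counting the three categories separately per researcher is what allows the age- and rank-related trends described above to be observed.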

    The impact of Cochrane Systematic Reviews : a mixed method evaluation of outputs from Cochrane Review Groups supported by the UK National Institute for Health Research

    © 2014 Bunn et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0). Background: There has been a growing emphasis on evidence-informed decision making in health care. Systematic reviews, such as those produced by the Cochrane Collaboration, have been a key component of this movement. The UK National Institute for Health Research (NIHR) Systematic Review Programme currently supports 20 Cochrane Review Groups (CRGs). The aim of this study was to identify the impacts of Cochrane reviews published by NIHR-funded CRGs during the years 2007–11. Methods: We sent questionnaires to CRGs and review authors, interviewed guideline developers, and used bibliometrics and documentary review to obtain an overview of CRG impact and to evaluate the impact of a sample of 60 Cochrane reviews. We used a framework with four categories (knowledge production, research targeting, informing policy development, and impact on practice/services). Results: A total of 1502 new and updated reviews were produced by the 20 NIHR-funded CRGs between 2007 and 2011. The clearest impacts were on policy: a total of 483 systematic reviews were cited in 247 sets of guidance, of which 62 were international, 175 national (87 from the UK) and 10 local. Review authors and CRGs provided some examples of impact on practice or services, for example safer use of medication, the identification of new effective drugs or treatments, and potential economic benefits through reduced use of unproven or unnecessary procedures. However, such impacts are difficult to document objectively, and the majority of reviewers were unsure whether their review had produced specific impacts. Qualitative data suggested that Cochrane reviews often play an instrumental role in informing guidance, although a poor fit with guideline scope or methods, reviews being out of date, and a lack of communication between CRGs and guideline developers were barriers to their use. Conclusions: Health and economic impacts of research are generally difficult to measure, and we found that to be the case with this evaluation. Impacts on knowledge production and clinical guidance were easier to identify and substantiate than those on clinical practice. Questions remain about how we define and measure impact, and more work is needed to develop suitable methods for impact analysis.

    A recursive field-normalized bibliometric performance indicator: An application to the field of library and information science

    Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.
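The combination of the two ideas can be sketched as a damped fixed-point iteration (a minimal illustration of the general idea only; the paper's exact indicator differs in detail, and the damping constant here is an assumption). Each publication's score is its citation impact normalized by a field baseline, with every citation weighted by the citing publication's own score:

```python
# Sketch: recursive citation weighing combined with field normalization.

def recursive_mncs(cites, expected, damping=0.5, iters=50):
    """cites[i]  = indices of publications citing publication i;
    expected[i]  = field-classification baseline for publication i.
    Iterates a PageRank-style damped update to a fixed point."""
    n = len(expected)
    score = [1.0] * n
    for _ in range(iters):
        score = [
            (1 - damping)
            + damping * sum(score[j] for j in cites[i]) / expected[i]
            for i in range(n)
        ]
    return score

# Toy citation graph: paper 0 is cited by 1 and 2, paper 1 by 2.
scores = recursive_mncs([[1, 2], [2], []], [2.0, 1.0, 1.0])
```

Because the field baselines `expected` enter the recursion at every step, an error in the classification scheme is not just inherited but amplified, which is consistent with the bias-reinforcing behaviour reported above.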

    Can Microsoft Academic be used for citation analysis of preprint archives? The case of the Social Science Research Network

    This is an accepted manuscript of an article published by Springer in Scientometrics on 07/03/2018, available online: https://doi.org/10.1007/s11192-018-2704-z The accepted version of the publication may differ from the final published version. Preprint archives play an important scholarly communication role within some fields. The impact of archives and individual preprints is difficult to analyse because online repositories are not indexed by the Web of Science or Scopus. In response, this article assesses whether the new Microsoft Academic can be used for citation analysis of preprint archives, focusing on the Social Science Research Network (SSRN). Although Microsoft Academic seems to index SSRN comprehensively, it groups only a small fraction of SSRN papers into an easily retrievable set whose character varies over time, making any field normalisation or citation comparison untrustworthy. A brief parallel analysis of arXiv suggests that similar results would occur for other online repositories. Systematic analyses of preprint archives are nevertheless possible with Microsoft Academic, given its promising coverage and citation results, when complete lists of archive publications are available from other sources.

    Societal output and use of research performed by health research groups

    Over the last decade, the evaluation of health research has paid increasing attention to the societal use and benefits of research, in addition to scientific quality, in both qualitative and quantitative ways. This paper elaborates primarily on a quantitative approach to assessing the societal output and use of research performed by health research groups (societal quality of research). For this purpose, one of the Dutch university medical centres, the Leiden University Medical Center (LUMC), was chosen as the subject of a pilot study, because of its mission to integrate top patient care with medical, biomedical and healthcare research and education. All research departments within this university medical centre were used as units of evaluation.

    Evaluating Research and Impact: A Bibliometric Analysis of Research by the NIH/NIAID HIV/AIDS Clinical Trials Networks

    Evaluative bibliometrics uses advanced techniques to assess the impact of scholarly work in the context of other scientific work, and usually compares the relative scientific contributions of research groups or institutions. Using publications from the National Institute of Allergy and Infectious Diseases (NIAID) HIV/AIDS extramural clinical trials networks, we assessed the presence, performance, and impact of papers published in 2006–2008. Through this approach, we sought to expand traditional bibliometric analyses beyond citation counts to include normative comparisons across journals and fields, visualization of co-authorship across the networks, and assessment of the inclusion of publications in reviews and syntheses. Specifically, we examined the research output of the networks in terms of the a) presence of papers in the scientific journal hierarchy, ranked on the basis of journal influence measures, b) performance of publications on traditional bibliometric measures, and c) impact of publications in comparisons with similar publications worldwide, adjusted for journals and fields. We also examined collaboration and interdisciplinarity across the initiative, through network analysis and modeling of co-authorship patterns. Finally, we explored the uptake of network-produced publications in research reviews and syntheses. Overall, the results suggest the networks are producing highly recognized work, engaging in extensive interdisciplinary collaborations, and having an impact across several areas of HIV-related science. The strengths and limitations of the approach for evaluating and monitoring research initiatives are discussed.